Conversation

@acisseJZhong (Contributor) commented Feb 2, 2025

Context

What is the purpose of this PR? Is it to

  • add a new feature
  • fix a bug
  • update tests and/or documentation
  • other (please add here)

Please link to any issues this PR addresses.

Changelog

What are the changes made in this PR?
Enabled TP training on top of FSDP. This addresses user request #2280 and also improves memory usage, so that we can drop fsdp_cpu_offload when training larger models.

Note:

  • TP + FSDP doesn't work with fused optimizers yet (optimizer.fused needs to be False).
  • Applying TP needs to happen after AC, since AC changes the FQNs (this cost a few hours of debugging :( ). See the sketch below.
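A minimal sketch of the TP-on-top-of-FSDP composition described above, assuming the FSDP2 fully_shard API and an 8-GPU run with dp=4, tp=2. The toy nn.Sequential model, the plan keys, and the hyperparameters are illustrative stand-ins, not the actual recipe code:

import os

import torch
import torch.nn as nn
from torch.distributed.device_mesh import init_device_mesh
from torch.distributed.fsdp import fully_shard  # FSDP2; import path varies across torch versions
from torch.distributed.tensor.parallel import (
    ColwiseParallel,
    RowwiseParallel,
    parallelize_module,
)

# Pin each rank to its own GPU (LOCAL_RANK is set by torchrun).
torch.cuda.set_device(int(os.environ["LOCAL_RANK"]))

# 2D mesh over 8 ranks: outer dim for data parallel (FSDP), inner dim for TP.
mesh = init_device_mesh("cuda", (4, 2), mesh_dim_names=("dp", "tp"))

# Toy stand-in for the transformer; the real recipe builds a torchtune model.
model = nn.Sequential(nn.Linear(128, 512), nn.ReLU(), nn.Linear(512, 128)).cuda()

# FQN-keyed TP plan applied over the "tp" sub-mesh. Per the note above,
# activation checkpointing is applied before this step, because AC rewrites
# module FQNs and would otherwise invalidate the plan keys.
parallelize_module(model, mesh["tp"], {"0": ColwiseParallel(), "2": RowwiseParallel()})

# Shard over the data-parallel sub-mesh with FSDP on top of TP.
fully_shard(model, mesh=mesh["dp"], reshard_after_forward=True)

# Per the note above, fused optimizers are not supported with TP + FSDP yet.
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5, fused=False)

Run under torchrun --nproc_per_node 8; mesh["dp"] and mesh["tp"] select the sub-meshes used for sharding and for the TP plan respectively.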

Test plan

Please make sure to do each of the following if applicable to your PR. If you're unsure about any one of these just ask and we will happily help. We also have a contributing page for some guidance on contributing.

  • run pre-commit hooks and linters (make sure you've first installed via pre-commit install)
  • add unit tests for any new functionality
  • update docstrings for any new or updated methods or classes
  • run unit tests via pytest tests
  • run recipe tests via pytest tests -m integration_test
  • manually run any new or modified recipes with sufficient proof of correctness
  • include relevant commands and any other artifacts in this summary (pastes of loss curves, eval results, etc.)

Running TP = 2, FSDP = 4:
[screenshots of the training run attached in the original PR]
Running TP = 1, FSDP = 8:
[screenshots of the training run attached in the original PR]

UX

If your function changed a public API, please add a dummy example of what the user experience will look like when calling it.
Here is a docstring example and a tutorial example.

  • I did not change any public API
  • I have added an example to docs or docstrings

@pytorch-bot (bot) commented Feb 2, 2025

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/torchtune/2330

Note: Links to docs will display an error until the docs builds have been completed.

✅ No Failures

As of commit 5ab0933 with merge base fb52557:
💚 Looks good so far! There are no failures yet. 💚

This comment was automatically generated by Dr. CI and updates every 15 minutes.

@facebook-github-bot facebook-github-bot added the CLA Signed This label is managed by the Facebook bot. Authors need to sign the CLA before a PR can be reviewed. label Feb 2, 2025
@codecov-commenter commented Feb 2, 2025

Codecov Report

Attention: Patch coverage is 8.77193% with 52 lines in your changes missing coverage. Please review.

Project coverage is 65.62%. Comparing base (fb52557) to head (5ab0933).
Report is 195 commits behind head on main.

Files with missing lines                            Patch %   Lines missing
recipes/full_finetune_distributed.py                0.00%     26 ⚠️
tests/recipes/test_full_finetune_distributed.py     16.66%    20 ⚠️
torchtune/training/_distributed.py                  14.28%    6 ⚠️
Additional details and impacted files
@@            Coverage Diff             @@
##            main    #2330       +/-   ##
==========================================
+ Coverage   8.63%   65.62%   +56.98%     
==========================================
  Files        311      358       +47     
  Lines      18848    21393     +2545     
==========================================
+ Hits        1628    14039    +12411     
+ Misses     17220     7354     -9866     

☔ View full report in Codecov by Sentry.
📢 Have feedback on the report? Share it here.


@acisseJZhong changed the title from "[WIP] TP + FSDP distributed training" to "TP + FSDP distributed training (full finetuning)" on Feb 4, 2025
@joecummings (Member) left a comment

Looks pretty good - Can I see some tests + output though?

fsdp_kwargs = {"reshard_after_forward": reshard_after_forward}
if cpu_offload:
    fsdp_kwargs["offload_policy"] = CPUOffloadPolicy()
if dp_mesh:

I actually think we can just do fsdp_kwargs["mesh"] = dp_mesh since a None will just get passed through and the sharding function can handle that itself.
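A sketch of the simplification being suggested, reusing the names from the quoted snippet (fsdp_kwargs, reshard_after_forward, cpu_offload, dp_mesh) and assuming FSDP2's fully_shard, where mesh=None falls back to the default process group:

from torch.distributed.fsdp import CPUOffloadPolicy  # import path varies across torch versions

fsdp_kwargs = {"reshard_after_forward": reshard_after_forward}
if cpu_offload:
    fsdp_kwargs["offload_policy"] = CPUOffloadPolicy()
# No branch on dp_mesh: a None value is simply passed through and handled by
# the sharding function itself.
fsdp_kwargs["mesh"] = dp_mesh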

utils.log_rank_zero(
    log,
-   "FSDP is enabled. Instantiating model and loading checkpoint on Rank 0 ...",
+   "FSDP and TP is enabled. Instantiating model and loading checkpoint on Rank 0 ...",

"Distributed training is enabled. Instantiating model etc etc..."


# Apply tensor parallelism to the model
if self.tensor_parallel_dim > 1:
    if self.parallelize_plan is None:

I would move this error up to init. Would be a shame to get all this way and then fail.

raise ValueError(
    "Parallelism plan need to be provided when tensor parallel is enabled."
)
tp_mesh = self.device_mesh["tp"]

nit: just use this the same way you do with dp e.g. don't create a new variable for tp_mesh, just pass it into the function.
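A sketch of how the two comments above could be addressed, reusing the recipe-level names from the quoted snippet (self.tensor_parallel_dim, self.parallelize_plan, self.device_mesh; method placement is illustrative): the plan validation moves up to __init__, and the mesh is indexed inline instead of binding a tp_mesh variable.

from torch.distributed.tensor.parallel import parallelize_module

# In __init__: fail fast if TP is requested without a plan.
if self.tensor_parallel_dim > 1 and self.parallelize_plan is None:
    raise ValueError(
        "A parallelize_plan must be provided when tensor parallelism is enabled."
    )

# Later, when setting up the model: apply the plan over the "tp" sub-mesh directly.
if self.tensor_parallel_dim > 1:
    model = parallelize_module(model, self.device_mesh["tp"], self.parallelize_plan)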

if self._compile:
    training.compile_model(model, verbose=self._is_rank_zero)

self.device_mesh = dist.init_device_mesh(

I would rather explicitly save what we need here. Instead of putting the whole device_mesh on the object: later on, if data_parallel_dim > 1, save self.local_size = dp_mesh.size() and self.local_rank = dp_mesh.get_local_rank(); otherwise set self.local_size = self.world_size and self.local_rank = self.rank.

Does that make sense?
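A sketch of the bookkeeping being suggested, reusing the names from the comment (self.local_size, self.local_rank) and assuming recipe attributes such as self.data_parallel_dim, self.tensor_parallel_dim, self.world_size, and self.rank; dist.init_device_mesh mirrors the quoted snippet:

import torch.distributed as dist

# Build the mesh locally; keep only the data-parallel size/rank on the object
# and pass the sub-meshes into the TP/sharding helpers rather than storing
# self.device_mesh.
device_mesh = dist.init_device_mesh(
    "cuda",
    (self.data_parallel_dim, self.tensor_parallel_dim),
    mesh_dim_names=("dp", "tp"),
)
if self.data_parallel_dim > 1:
    dp_mesh = device_mesh["dp"]
    self.local_size = dp_mesh.size()
    self.local_rank = dp_mesh.get_local_rank()
else:
    self.local_size = self.world_size
    self.local_rank = self.rank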

@nazrak-atlassian mentioned this pull request on Feb 7, 2025
@joecummings (Member) left a comment

Looks awesome! Just need to make sure the merge changes are correct.

utils.log_rank_zero(
    log,
-   "FSDP is enabled. Instantiating model and loading checkpoint on Rank 0 ...",
+   "Distributed training(FSDP and TP) is enabled. Instantiating model and loading checkpoint on Rank 0 ...",

nit: we don't really know if TP and FSDP are enabled at this point. TP is only for 70B models right now looks like. Could be confusing to users.

@joecummings merged commit 9da35c7 into meta-pytorch:main on Feb 10, 2025 (17 checks passed)